Slip detection is crucial to the safety and efficiency of rovers driving on extraterrestrial surfaces. Current planetary rover slip detection systems rely on visual perception, assuming that sufficient visual features are available in the environment. However, vision-based methods are vulnerable in perceptually degraded planetary environments with predominantly featureless terrain, such as regolith, glacial terrain, and salt evaporites, and with poor lighting conditions, such as dark caves and permanently shadowed regions. Relying solely on visual sensors for slip detection also demands additional computational power and reduces the rover's traversal rate. This paper answers the question of how to detect wheel slip on planetary rovers without depending on visual perception. To this end, we propose a slip detection system that draws its information from a proprioceptive localization framework capable of providing reliable, continuous, and computationally efficient state estimation over hundreds of meters. This is achieved by using zero-velocity updates, zero-angular-rate updates, and non-holonomic constraints as pseudo-measurement updates to an inertial navigation system framework. The proposed method is evaluated on real hardware and field-tested in a planetary-analog environment. Using only an IMU and wheel encoders, it achieves 92% slip detection accuracy over traverses of roughly 150 m.
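As a rough illustration of the pseudo-measurement idea, the sketch below applies a zero-velocity update to an EKF-style INS in NumPy; the state layout, noise level, and function names are assumptions for illustration, not the paper's actual filter design.

```python
import numpy as np

def zupt_update(x, P, R_zupt=1e-4 * np.eye(3)):
    """Apply a zero-velocity pseudo-measurement to an INS Kalman filter.

    Assumed state layout (illustrative): x = [p(3), v(3), attitude(3), ...].
    When the rover is detected as stationary, the velocity block is
    "measured" to be zero, which bounds INS drift without any camera.
    """
    n = x.size
    H = np.zeros((3, n))
    H[:, 3:6] = np.eye(3)          # pseudo-measurement observes velocity only
    z = np.zeros(3)                 # stationary => velocity is zero
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R_zupt        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(n) - K @ H) @ P
    return x, P
```

Zero-angular-rate updates and non-holonomic constraints follow the same pattern, only with a different observation matrix H and pseudo-measurement z.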
In this letter, we improve the adaptability of an existing robust estimation algorithm by learning two parameters, rather than one, to better fit the residual distribution. Our approach uses these two parameters to compute the weights for iteratively reweighted least squares (IRLS). This adaptive nature of the weights proves helpful in situations where the noise level varies across the measurements and is shown to improve robustness to outliers. We first test the algorithm on a point cloud registration problem with synthetic datasets, where the ground-truth transformation is known. We then also evaluate the method with an open-source lidar-inertial SLAM package to demonstrate that the proposed approach is more effective than the existing version of the algorithm for the application of incremental lidar odometry. We also analyze the joint variability of the two learned parameters.
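A minimal sketch of two-parameter IRLS weighting, assuming a Barron-style general robust kernel with shape alpha and scale c; the letter's exact kernel and its procedure for fitting the two parameters are not reproduced here.

```python
import numpy as np

def barron_weights(residuals, alpha, c):
    """IRLS weights for a Barron-style general adaptive robust loss.

    Two parameters are fit to the residual distribution instead of one:
    alpha controls the shape (tail behavior) and c the scale. Large
    residuals receive small weights, down-weighting outliers.
    """
    r2 = (residuals / c) ** 2
    if alpha == 2.0:                       # quadratic (non-robust) limit
        return np.full_like(residuals, 1.0 / c**2)
    if alpha == 0.0:                       # Cauchy/Lorentzian limit
        return (1.0 / c**2) / (0.5 * r2 + 1.0)
    return (1.0 / c**2) * (r2 / abs(alpha - 2.0) + 1.0) ** (alpha / 2.0 - 1.0)

def irls_step(A, b, x, alpha, c):
    """One iteratively reweighted least-squares step for A @ x ~ b."""
    w = barron_weights(b - A @ x, alpha, c)
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```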
This paper compares the performance of adaptive and robust Kalman filter algorithms for improving wheel-inertial odometry on low-featured rough terrain. The methods considered include classical adaptive and robust approaches as well as variational approaches, and they are evaluated experimentally on a wheeled rover on terrain similar to that encountered in planetary exploration. The variational filters show improved solution accuracy over the classical adaptive filters and are able to handle erroneous wheel odometry measurements, maintaining good localization without significant drift. We also show how variation in the parameters affects localization performance.
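For concreteness, one classical innovation-based adaptive scheme of the kind such comparisons include looks roughly like the following; the sliding-window design and the projection step are illustrative assumptions. The variational filters instead infer the noise covariance jointly with the state.

```python
import numpy as np

def adaptive_R(innovations, H, P):
    """Innovation-based adaptive estimate of the measurement noise R.

    Approximates the innovation covariance from a sliding window of
    recent innovation vectors (shape: window x meas_dim) and subtracts
    the predicted part H P H^T.
    """
    C = np.cov(np.asarray(innovations).T)   # empirical innovation covariance
    R = C - H @ P @ H.T
    # keep R positive semi-definite in case of a noisy window estimate
    w, V = np.linalg.eigh(R)
    return (V * np.maximum(w, 1e-9)) @ V.T
```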
Factor graphs have recently emerged as an alternative solution method for GNSS positioning. In this paper, we review how factor graphs have been implemented for GNSS, some of their advantages over Kalman filters, and their significance in making positioning solutions more robust to degraded measurements. We also discuss how factor graphs can become an important tool in the field of radionavigation.
In this work, we demonstrate the importance of zero-velocity information for global navigation satellite system (GNSS) based navigation. The effectiveness of using zero-velocity information through zero-velocity updates (ZUPTs) has already been shown in the literature. Here, we leverage this information and add it as a position constraint in a GNSS factor graph. We also compare its performance against a GNSS/inertial navigation system (INS) coupled factor graph. We tested our ZUPT-aided factor graph method on three datasets and compared it with a GNSS-only factor graph.
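A toy, hand-rolled version of the idea, assuming 1-D positions and linear factors (the paper's graph, states, and solver are richer than this): zero velocity is encoded as a relative position constraint between consecutive states.

```python
import numpy as np

def solve_zupt_factor_graph(gnss_fixes, stationary, T,
                            sigma_gnss=1.0, sigma_zupt=0.01):
    """Tiny linear factor graph over 1-D positions p_0..p_T.

    Factors:
      * GNSS factors  (t, z): p_t ~ z                 with std sigma_gnss
      * ZUPT factors  t in stationary: p_{t+1} - p_t ~ 0 with std sigma_zupt
    The ZUPT factor expresses zero velocity as a position constraint.
    """
    rows, rhs = [], []
    for t, z in gnss_fixes:                      # unary GNSS factors
        a = np.zeros(T + 1); a[t] = 1.0
        rows.append(a / sigma_gnss); rhs.append(z / sigma_gnss)
    for t in stationary:                         # binary ZUPT factors
        a = np.zeros(T + 1); a[t] = -1.0; a[t + 1] = 1.0
        rows.append(a / sigma_zupt); rhs.append(0.0)
    A, b = np.vstack(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]  # joint estimate of positions

# Example: GNSS fixes at t=0 and t=4, rover stationary during steps 1..3.
p = solve_zupt_factor_graph([(0, 0.0), (4, 0.3)], stationary=[1, 2, 3], T=4)
```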
Participants in political discourse employ rhetorical strategies -- such as hedging, attributions, or denials -- to display varying degrees of belief commitments to claims proposed by themselves or others. Traditionally, political scientists have studied these epistemic phenomena through labor-intensive manual content analysis. We propose to help automate such work through epistemic stance prediction, drawn from research in computational semantics, to distinguish at the clausal level what is asserted, denied, or only ambivalently suggested by the author or other mentioned entities (belief holders). We first develop a simple RoBERTa-based model for multi-source stance predictions that outperforms more complex state-of-the-art modeling. Then we demonstrate its novel application to political science by conducting a large-scale analysis of the Mass Market Manifestos corpus of U.S. political opinion books, where we characterize trends in cited belief holders -- respected allies and opposed bogeymen -- across U.S. political ideologies.
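A minimal sketch of a RoBERTa-based stance classifier in the spirit of the paper, using Hugging Face transformers; the marker scheme, label set, and checkpoint are assumptions, and the classification head is untrained here, so fine-tuning on stance-annotated data would be required before the outputs mean anything.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Labels follow the abstract's asserted / denied / ambivalent distinction.
LABELS = ["asserted", "denied", "ambivalent"]
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

def predict_stance(sentence, holder, clause):
    # Mark the belief holder and the target clause in the text so a single
    # encoder pass can score one (holder, clause) pair.
    text = sentence.replace(holder, f"<h> {holder} </h>")
    text = text.replace(clause, f"<c> {clause} </c>")
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_stance(
    "Senators claim the bill reduces costs.",
    "Senators", "the bill reduces costs"))
```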
While inferring common actor states (such as position or velocity) is an important and well-explored task of the perception system aboard a self-driving vehicle (SDV), it may not always provide sufficient information to the SDV. This is especially true in the case of active emergency vehicles (EVs), where light-based signals also need to be captured to provide a full context. We consider this problem and propose a sequential methodology for the detection of active EVs, using an off-the-shelf CNN model operating at a frame level and a downstream smoother that accounts for the temporal aspect of flashing EV lights. We also explore model improvements through data augmentation and training with additional hard samples.
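As an illustration of the downstream smoother's role, the sketch below runs a two-state HMM filter over per-frame CNN scores; the paper's smoother and parameters are not specified here, so the transition and emission values are assumptions.

```python
import numpy as np

def smooth_ev_scores(frame_scores, transition=0.95, emission_conf=0.8):
    """Forward-pass HMM filter over per-frame 'active EV' probabilities.

    A flashing light bar makes single-frame predictions oscillate; a
    two-state HMM (inactive/active) with sticky transitions integrates
    evidence over time.
    """
    p_active = 0.5                               # prior belief
    out = []
    for s in frame_scores:                       # s = CNN score in [0, 1]
        # predict: each state persists with probability `transition`
        p_active = transition * p_active + (1 - transition) * (1 - p_active)
        # update: treat the CNN score as a noisy emission
        like_on = emission_conf * s + (1 - emission_conf) * (1 - s)
        like_off = emission_conf * (1 - s) + (1 - emission_conf) * s
        p_active = (like_on * p_active /
                    (like_on * p_active + like_off * (1 - p_active)))
        out.append(p_active)
    return np.array(out)

# Oscillating per-frame scores (a flashing light) are damped toward a
# steadier belief than the raw 0.9 / 0.1 sequence.
print(smooth_ev_scores([0.9, 0.1, 0.9, 0.1, 0.9, 0.9]).round(2))
```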
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
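A minimal sketch of the soft-prompt idea behind instruction prompt tuning, in PyTorch and assuming a Hugging Face-style model that accepts inputs_embeds; Med-PaLM's actual recipe (hard instructions plus a learned soft prompt on Flan-PaLM) is more involved, and the sizes below are toy assumptions.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Parameter-efficient prompt tuning: only `prompt` is trained.

    A small number of continuous prompt vectors is learned from a few
    exemplars while the base LLM stays frozen.
    """
    def __init__(self, lm, n_prompt_tokens=20, d_model=768):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():           # freeze the base model
            p.requires_grad_(False)
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, input_embeds):             # (batch, seq, d_model)
        batch = input_embeds.size(0)
        soft = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # prepend the learned prompt to the token embeddings
        return self.lm(inputs_embeds=torch.cat([soft, input_embeds], dim=1))
```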
Knowledge distillation (KD) has gained a lot of attention in the field of model compression for edge devices thanks to its effectiveness in compressing large powerful networks into smaller lower-capacity models. Online distillation, in which both the teacher and the student are learning collaboratively, has also gained much interest due to its ability to improve on the performance of the networks involved. The Kullback-Leibler (KL) divergence ensures the proper knowledge transfer between the teacher and student. However, most online KD techniques present some bottlenecks under the network capacity gap: when the models are trained cooperatively and simultaneously, the KL distance becomes incapable of properly minimizing the gap between the teacher's and student's distributions. Alongside accuracy, critical edge device applications need well-calibrated compact networks. Confidence calibration provides a sensible way of getting trustworthy predictions. We propose BD-KD: Balancing of Divergences for online Knowledge Distillation. We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network without limiting the teacher network's learning process. We demonstrate that, by performing this balancing design at the level of the student distillation loss, we improve upon both the accuracy and the calibration of the compact student network. We conducted extensive experiments using a variety of network architectures and show improvements on multiple datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet. We illustrate the effectiveness of our approach through comprehensive comparisons and ablations with current state-of-the-art online and offline KD techniques.
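A sketch of a balanced-divergence student loss, assuming a fixed balancing weight beta and temperature T; BD-KD adapts the balance, so this scalar version only illustrates the two KL directions being mixed.

```python
import torch
import torch.nn.functional as F

def bd_kd_loss(student_logits, teacher_logits, beta, T=4.0):
    """Weighted sum of forward and reverse KL for online distillation.

    beta mixes the classic forward direction with the mode-seeking
    reverse direction (BD-KD adapts this weight; a fixed scalar is a
    simplification).
    """
    s_log = F.log_softmax(student_logits / T, dim=-1)
    t_log = F.log_softmax(teacher_logits / T, dim=-1)
    # forward KL(teacher || student): classic distillation direction
    fwd = F.kl_div(s_log, t_log.exp(), reduction="batchmean")
    # reverse KL(student || teacher): focuses the compact student
    rev = F.kl_div(t_log, s_log.exp(), reduction="batchmean")
    return (T * T) * (beta * fwd + (1.0 - beta) * rev)
```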
A canonical algorithm for log-concave sampling is the Langevin Algorithm, aka the Langevin Diffusion run with some discretization stepsize $\eta > 0$. This discretization leads the Langevin Algorithm to have a stationary distribution $\pi_{\eta}$ which differs from the stationary distribution $\pi$ of the Langevin Diffusion, and it is an important challenge to understand whether the well-known properties of $\pi$ extend to $\pi_{\eta}$. In particular, while concentration properties such as isoperimetry and rapidly decaying tails are classically known for $\pi$, the analogous properties for $\pi_{\eta}$ are open questions with direct algorithmic implications. This note provides a first step in this direction by establishing concentration results for $\pi_{\eta}$ that mirror classical results for $\pi$. Specifically, we show that for any nontrivial stepsize $\eta > 0$, $\pi_{\eta}$ is sub-exponential (respectively, sub-Gaussian) when the potential is convex (respectively, strongly convex). Moreover, the concentration bounds we show are essentially tight. Key to our analysis is the use of a rotation-invariant moment generating function (aka Bessel function) to study the stationary dynamics of the Langevin Algorithm. This technique may be of independent interest because it enables directly analyzing the discrete-time stationary distribution $\pi_{\eta}$ without going through the continuous-time stationary distribution $\pi$ as an intermediary.
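For reference, the Langevin Algorithm itself is a one-line iteration; the sketch below samples from a strongly convex potential and shows the stepsize-dependent bias of $\pi_{\eta}$ empirically.

```python
import numpy as np

def langevin_algorithm(grad_V, x0, eta, n_steps, rng=np.random.default_rng(0)):
    """Langevin Algorithm: x <- x - eta * grad_V(x) + sqrt(2 * eta) * xi.

    Its iterates converge to a biased stationary law pi_eta != pi
    (pi proportional to exp(-V)); the note shows pi_eta keeps
    sub-exponential (convex V) or sub-Gaussian (strongly convex V)
    tails for any nontrivial eta > 0.
    """
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x - eta * grad_V(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Strongly convex potential V(x) = ||x||^2 / 2 (grad_V = identity): pi is the
# standard Gaussian, and pi_eta stays sub-Gaussian with a slightly inflated,
# stepsize-dependent scale.
xs = langevin_algorithm(lambda x: x, x0=np.zeros(2), eta=0.1, n_steps=20000)
print(xs[1000:].std(axis=0))   # close to, but not exactly, 1
```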